Cross Aggregation Transformer for Image Restoration
Recently, the Transformer architecture has been introduced into image restoration to replace convolutional neural networks (CNNs), with surprising results. Considering the high computational complexity of Transformers with global attention, some methods use local square windows to limit the scope of self-attention. However, these methods lack direct interaction among different windows, which limits the establishment of long-range dependencies. To address this issue, we propose a new image restoration model, the Cross Aggregation Transformer (CAT). The core of our CAT is Rectangle-Window Self-Attention (Rwin-SA), which applies horizontal and vertical rectangle-window attention in different heads in parallel to expand the attention area and aggregate features across different windows. We also introduce the Axial-Shift operation for interaction between different windows. Furthermore, we propose the Locality Complementary Module to complement the self-attention mechanism, incorporating the inductive biases of CNNs (e.g., translation invariance and locality) into the Transformer and enabling global-local coupling. Extensive experiments demonstrate that our CAT outperforms recent state-of-the-art methods on several image restoration tasks.
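To make the rectangle-window idea concrete, here is a minimal NumPy sketch of how a feature map can be partitioned into horizontal and vertical rectangle windows for different heads, with plain scaled dot-product attention inside each window. The helper names (`rect_window_partition`, `window_attention`) and the window sizes are illustrative assumptions, not the paper's actual implementation (which also includes learned projections, the Axial-Shift operation, and the Locality Complementary Module).

```python
import numpy as np

def rect_window_partition(x, rh, rw):
    """Split an (H, W, C) feature map into non-overlapping rh x rw
    rectangle windows -> (num_windows, rh*rw, C)."""
    H, W, C = x.shape
    assert H % rh == 0 and W % rw == 0
    x = x.reshape(H // rh, rh, W // rw, rw, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, rh * rw, C)

def window_attention(w):
    """Scaled dot-product self-attention inside each window
    (no learned Q/K/V projections; only shows the attention scope)."""
    scale = w.shape[-1] ** -0.5
    attn = np.einsum('nic,njc->nij', w, w) * scale
    attn = np.exp(attn - attn.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)
    return np.einsum('nij,njc->nic', attn, w)

# Toy 8x8 feature map with 4 channels.
feat = np.random.rand(8, 8, 4)

# Horizontal heads attend within wide 2x8 rectangles,
# vertical heads within tall 8x2 rectangles (illustrative sizes).
h_out = window_attention(rect_window_partition(feat, 2, 8))  # (4, 16, 4)
v_out = window_attention(rect_window_partition(feat, 8, 2))  # (4, 16, 4)
```

Because the horizontal and vertical rectangles tile the image along different axes, a pixel's attention neighborhood differs per head group, which is what lets features aggregate across window boundaries when the heads are merged.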
Multi-agent Reinforcement Learning Paper Reading UPDeT
If you are new to the field of multi-agent reinforcement learning, the links below are all well-known multi-agent reinforcement learning papers that I have shared before. These papers are all about factorization in multi-agent problems, so I believe reading them first will give you useful background before this article! Transfer learning has been widely used in many machine learning fields, such as computer vision (object recognition, classification, etc.) and natural language processing (translation, semantic analysis, etc.), and has been shown to significantly improve training efficiency. However, there has been little research applying transfer learning to multi-agent reinforcement learning. Recent advances in multi-agent reinforcement learning have largely been limited to training one model from scratch for every new task. This limitation arises from model architectures with fixed input and output dimensions, which hinders experience accumulation and transfer of the learned agent across tasks of diverse difficulty levels.
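The fixed-dimension restriction can be illustrated with a short sketch: a plain MLP policy is tied to one observation size, whereas set-style self-attention over per-entity features (the kind of mechanism UPDeT builds on) produces one output row per entity and therefore handles a varying number of agents. The function below is a simplified assumption for illustration, not UPDeT's actual architecture.

```python
import numpy as np

def entity_attention(entities):
    """Self-attention over a variable-length set of per-entity feature
    vectors. The output keeps one row per entity, so the same
    (hypothetical) downstream policy head works for any number of
    agents -- unlike a fixed-input-dimension MLP."""
    scale = entities.shape[-1] ** -0.5
    attn = entities @ entities.T * scale
    attn = np.exp(attn - attn.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)
    return attn @ entities

obs_3 = np.random.rand(3, 8)  # task with 3 observed entities
obs_5 = np.random.rand(5, 8)  # harder task with 5 entities
out_3 = entity_attention(obs_3)  # shape (3, 8)
out_5 = entity_attention(obs_5)  # shape (5, 8)
```

Since the same parameters (here, none; in a real model, the learned Q/K/V projections) apply regardless of entity count, such a model can in principle be transferred across tasks of different sizes, which is the motivation stated above.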